ONTAP AI


Deepth Dinesan, Director, NetApp, exploring innovation and outcomes with ONTAP AI at the DVC

#artificialintelligence

Deepth Dinesan, Director of AI, Robotics & Quantum Computing at NetApp, sheds light on the fundamental aspects of AI algorithms, the importance of access to large amounts of data, and how, at the DVC, NetApp and NVIDIA are using ONTAP AI to explore outcomes based on analytics and AI.


Deep Dive into ONTAP AI Performance and Sizing NetApp Blog

#artificialintelligence

In my previous blog, I talked about the significant advantages that ONTAP AI provides for anyone who wants to deploy AI infrastructure quickly and get results fast. ONTAP AI is designed to deliver superior scalability and performance, offering up to 25x greater raw capacity and 6x greater I/O performance. Last time, I mentioned that NVIDIA and NetApp have done a lot of work to characterize the performance of ONTAP AI. This time I'm going to dig into ONTAP AI performance metrics as promised. ONTAP AI compute performance scales through the addition of NVIDIA DGX-1 servers.


Land of Robots Meets AI at GTC Japan 2018

#artificialintelligence

The intersection of robotics and artificial intelligence (AI) is poised to transform advanced manufacturing, so it's not surprising that robotics was a major theme of GTC Japan 2018. NVIDIA's 10th annual GPU technology conference in the country focused on addressing the rapidly growing AI needs of Japan's manufacturing leaders. Where GTC 2018 Silicon Valley focused on training performance, this conference also emphasized technologies to deliver the inferencing performance necessary for autonomous robots and vehicles. NVIDIA CEO Jensen Huang announced a range of new products, including NVIDIA AGX embedded AI computers for autonomous machines and the NVIDIA Tesla T4 GPU and TensorRT software for inferencing. Attendance and interest in the show have grown steadily every year.


5 Key Advantages of AI Data Storage NetApp Blog

#artificialintelligence

In this blog series, I've focused on how NetApp can help you streamline your artificial intelligence projects. With technologies and services for managing data everywhere, NetApp is well positioned to solve your AI data challenges. Built on our partnership with NVIDIA and powered by NVIDIA DGX supercomputers and NetApp all-flash storage, ONTAP AI lets you simplify, accelerate, and scale your AI data pipeline to gain deeper understanding in less time. Combining Data Fabric-enabled NetApp storage with GPU-accelerated NVIDIA computing systems results in capabilities that aren't available from other turnkey AI solutions, on-premises or in the cloud. Here are five of the key advantages of ONTAP AI.


NetApp gives AI the FlexPod treatment

#artificialintelligence

One of NetApp's biggest successes is its FlexPod reference architectures for on-premises infrastructure, comprising NetApp arrays and data fabric software plus Cisco servers and networking kit. Analyst firm IDC says the product accounts for a third of the converged systems market and over US$2 billion of annual revenue. FlexPods are so good at what they do that Microsoft will deploy them for its VMware-on-Azure service. This time around, NetApp has teamed with NVIDIA, which makes AI-centric servers called "DGX" that pack a pair of Xeons and up to 16 Tesla V100 GPUs. NetApp believes that users want to start testing and/or using AI, but are held back by on-premises infrastructure that's not up to the job and a fear that building the right hardware stack will be complex and costly. The new "ONTAP AI proven architecture" attempts to change that, by explaining how to build rigs based on NetApp's new high-end AFF A800 array, Cisco networking, and NVIDIA's DGX servers.


NetApp and Nvidia announce Ontap AI, an AI data platform for enterprise

#artificialintelligence

Some would argue that the data on which AI models are trained is more important than the models themselves, which is one of the reasons IDC predicts that more than 44 zettabytes of digital data will be created by 2020. Thankfully, the rise of big data has coincided with a continued decline in cloud storage pricing, motivated in part by cheaper media costs, better management tools, and innovations in object storage. But not all cloud storage providers are created equal. Some lack the fine-grained management tools required to collate, process, and transfer AI model data quickly and efficiently. And not all enterprises have storage stacks optimized for data science workflows. Nvidia and data storage company NetApp today jointly announced what they believe is a solution: Ontap AI, which they describe as an "AI-proven architecture."


How NetApp & NVIDIA Can Accelerate Your AI Journey NetApp Blog

#artificialintelligence

Visionaries in virtually all industries are looking for ways to apply artificial intelligence (AI) to enable new customer touchpoints, reinvent the customer experience, drive business value, and even change the world. Of the many AI use cases in development, we're focused on a few in particular. For example, adding a few sensors to an asthma inhaler opens a huge opportunity to correlate usage and location information among patients. Unfortunately, many organizations still underestimate how much AI depends on an ability to marshal and manage vast quantities of data. To help put your deep learning projects on a path to achieve real business impact, NetApp is today announcing the NetApp ONTAP AI proven architecture. Powered by NVIDIA DGX supercomputers and NetApp all-flash storage, ONTAP AI lets you simplify, accelerate, and scale the data pipeline needed for AI to gain deeper understanding in less time.